
    CUP: Comprehensive User-Space Protection for C/C++

    Memory corruption vulnerabilities in C/C++ applications enable attackers to execute code, change data, and leak information. Current memory sanitizers do not provide comprehensive coverage of a program's data. In particular, existing tools focus primarily on heap allocations, with limited support for stack allocations and globals. Additionally, existing tools focus on the main executable, with limited support for system libraries. Further, they suffer from both false positives and false negatives. We present CUP (Comprehensive User-Space Protection for C/C++), an LLVM sanitizer that provides complete spatial and probabilistic temporal memory safety for C/C++ programs on 64-bit architectures (with a prototype implementation for x86_64). CUP uses a hybrid metadata scheme that supports all program data, including globals, heap, and stack, and maintains the ABI. On the NIST Juliet test suite, CUP reduces false negatives by 10x (to 0.1%) compared to state-of-the-art LLVM sanitizers, and produces no false positives. CUP instruments all user-space code, including libc and other system libraries, removing them from the trusted code base.
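
    The abstract does not describe CUP's metadata layout, so the following is only a minimal sketch of the general idea behind sanitizer-style spatial checks, not CUP's actual design: each allocation registers a (base, size) record, and a check is assumed to be inserted by the compiler before every memory access. The names cup_register and cup_check and the lookup structure are hypothetical.

```cpp
// Minimal sketch of sanitizer-style spatial checks (not CUP's actual metadata
// scheme): allocations register (base, size) records, and a check is assumed
// to run before every dereference. All names are hypothetical.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <map>

static std::map<uintptr_t, size_t> g_objects;  // base address -> object size

void cup_register(void* base, size_t size) {   // called at allocation sites
    g_objects[reinterpret_cast<uintptr_t>(base)] = size;
}

void cup_check(void* p, size_t access_size) {  // inserted before each access
    uintptr_t addr = reinterpret_cast<uintptr_t>(p);
    auto it = g_objects.upper_bound(addr);     // first object starting after p
    if (it == g_objects.begin()) { fprintf(stderr, "wild pointer\n"); abort(); }
    --it;                                      // candidate enclosing object
    if (addr + access_size > it->first + it->second) {
        fprintf(stderr, "out-of-bounds access\n");
        abort();
    }
}

int main() {
    int* a = static_cast<int*>(malloc(4 * sizeof(int)));
    cup_register(a, 4 * sizeof(int));
    cup_check(&a[3], sizeof(int));             // in bounds: passes
    cup_check(&a[4], sizeof(int));             // one past the end: aborts
}
```

    A production sanitizer would keep this metadata in a compact shadow structure rather than a std::map, and would also provide the temporal-safety checks that this sketch omits.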

    Shadow Honeypots

    We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network or service. Traffic that is considered anomalous is processed by a "shadow honeypot" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular ("production") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.

    Detecting Targeted Attacks Using Shadow Honeypots

    We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network or service. Traffic that is considered anomalous is processed by a "shadow honeypot" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular ("production") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified will be validated by the shadow and will be handled correctly by the system transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. Contrary to regular honeypots, our architecture can be used for both server and client applications. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite a considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
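
    Neither abstract includes code, so the sketch below is a hypothetical illustration of the filtering logic described above, not the papers' Apache/Firefox implementation: an anomaly score routes each request either to the production instance or to an instrumented shadow copy (whose state changes are assumed to be rolled back), and requests the shadow identifies as attacks are dropped and remembered by the filter. The class and function names are invented for illustration.

```cpp
// Hypothetical sketch of shadow-honeypot request routing. An anomaly detector
// scores each request; anomalous requests are diverted to an instrumented
// "shadow" copy of the service whose state changes are assumed to be
// discarded when an attack fires.
#include <functional>
#include <iostream>
#include <string>
#include <unordered_set>
#include <utility>

struct Verdict {
    bool attack;            // did the shadow's instrumentation trigger?
    std::string response;   // response produced if the request was legitimate
};

class ShadowRouter {
public:
    ShadowRouter(std::function<double(const std::string&)> score,
                 std::function<std::string(const std::string&)> production,
                 std::function<Verdict(const std::string&)> shadow,
                 double threshold)
        : score_(std::move(score)), production_(std::move(production)),
          shadow_(std::move(shadow)), threshold_(threshold) {}

    std::string handle(const std::string& request) {
        if (known_attacks_.count(request)) return "dropped";   // filter repeats
        if (score_(request) < threshold_)
            return production_(request);                       // normal fast path
        Verdict v = shadow_(request);                          // anomalous: run on shadow
        if (v.attack) {                                        // real attack caught
            known_attacks_.insert(request);                    // update the filter
            return "dropped";
        }
        return v.response;                                     // false positive: serve it
    }

private:
    std::function<double(const std::string&)> score_;
    std::function<std::string(const std::string&)> production_;
    std::function<Verdict(const std::string&)> shadow_;
    double threshold_;
    std::unordered_set<std::string> known_attacks_;
};

int main() {
    ShadowRouter router(
        [](const std::string& r) { return r.find('%') != std::string::npos ? 1.0 : 0.0; },
        [](const std::string& r) { return "OK: " + r; },
        [](const std::string& r) { return Verdict{r.find("exploit") != std::string::npos, "OK: " + r}; },
        0.5);
    std::cout << router.handle("GET /index.html") << "\n";     // production path
    std::cout << router.handle("GET /%6eew-page") << "\n";     // anomalous but benign
    std::cout << router.handle("GET /%exploit") << "\n";       // caught by the shadow
}
```

    The key property the sketch tries to capture is that false positives from the anomaly detector cost only the (slower) shadow path, not correctness, which is why the papers can afford aggressively tuned detectors.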

    FreeGuard

    In spite of years of improvements to software security, heap-related attacks still remain a severe threat. One reason is that many existing memory allocators fall short in a variety of aspects. For instance, performance-oriented allocators are designed with very limited countermeasures against attacks, while secure allocators generally suffer from significant performance overhead, e.g., running up to 10× slower. This paper therefore introduces FreeGuard, a secure memory allocator that prevents or reduces a wide range of heap-related attacks, such as heap overflows, heap over-reads, use-after-frees, as well as double and invalid frees. FreeGuard has similar performance to the default Linux allocator, with less than 2% overhead on average, while providing significantly stronger security guarantees. FreeGuard also addresses multiple implementation issues of existing secure allocators, such as scalability. Experimental results demonstrate that FreeGuard is very effective in defending against a variety of heap-related attacks.
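
    The abstract lists the attack classes FreeGuard addresses but not its mechanisms, so the sketch below illustrates two generic secure-allocator defenses against the same attack classes rather than FreeGuard's actual design: a canary byte placed after each object to detect overflows at free time, and bookkeeping of freed blocks to reject double and invalid frees. All names and constants are illustrative.

```cpp
// Illustrative (non-FreeGuard) sketch of two generic secure-allocator defenses:
// a canary byte after each object to detect heap overflows when the object is
// freed, and tracking of freed blocks to reject double/invalid frees.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unordered_map>
#include <unordered_set>

static const unsigned char kCanary = 0xAB;
static std::unordered_map<void*, size_t> g_live;   // live ptr -> requested size
static std::unordered_set<void*> g_freed;          // quarantined, never reused here

void* guarded_malloc(size_t size) {
    unsigned char* p = static_cast<unsigned char*>(malloc(size + 1));
    if (!p) return nullptr;
    p[size] = kCanary;                             // canary right after the object
    g_live[p] = size;
    return p;
}

void guarded_free(void* ptr) {
    if (g_freed.count(ptr)) { fprintf(stderr, "double free\n"); abort(); }
    auto it = g_live.find(ptr);
    if (it == g_live.end()) { fprintf(stderr, "invalid free\n"); abort(); }
    unsigned char* p = static_cast<unsigned char*>(ptr);
    if (p[it->second] != kCanary) { fprintf(stderr, "heap overflow detected\n"); abort(); }
    g_live.erase(it);
    g_freed.insert(ptr);   // block is quarantined, not handed back to the heap,
                           // which also blunts use-after-free exploitation here
}

int main() {
    char* buf = static_cast<char*>(guarded_malloc(8));
    memcpy(buf, "12345678", 8);                    // fills the object exactly
    guarded_free(buf);                             // canary intact: succeeds
    guarded_free(buf);                             // double free: aborts
}
```

    A real secure allocator implements far more than this (randomized placement, guard pages, efficient metadata) while keeping overhead low, which is the trade-off the paper targets.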

    Detection of Zero-Day Worms (Ανίχνευση Πρωτοεμφανιζόμενων Worms)

    The Internet abounds with computer security threats. Many high-visibility attacks involve network-borne, self-replicating programs called worms. Recent incidents suggest that Internet worms can spread so fast that in-time human-mediated reaction is not possible, and therefore the initial response to outbreaks has to be automated. The first step towards combating new, unknown, so-called zero-day worms is the ability to detect and identify them at the initial stages of their spread. In this work, we explore techniques for detecting zero-day worms. Our starting point is the observation that all worms to this day have included substantial commonality among their instances. Based on this observation, we present EAR, a novel method for detecting new worms based on identifying similar packet contents directed to multiple destination hosts. We evaluate our method using real traffic traces that contain real worms. Our results suggest that our approach is able to identify novel worms while keeping the rate of false alarms as low as zero. However, it is possible for attackers to obfuscate attacks so that no common substring can be used as a characteristic signature. To address this problem, we have designed a new buffer-overflow attack detection heuristic, called STRIDE, that offers three main improvements over previous work: it detects several types of polymorphic attacks that other techniques are blind to, has a lower rate of false positives, and is significantly more computationally efficient, and hence more suitable for use at the network level. Finally, we have integrated these detection techniques into a passive network monitoring application that identifies new worms and extracts signatures in the form of content substrings, destination port numbers, and address blacklists to block the worm at the network level.
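
    The abstract describes EAR only as identifying similar packet contents directed to multiple destinations, so the following is a simplified, hypothetical rendering of that idea, not the thesis's actual algorithm: fixed-length windows of each payload are fingerprinted, and an alert is raised once the same fingerprint has been seen in traffic to a threshold number of distinct destinations. The window length, threshold, and function names are illustrative.

```cpp
// Hypothetical, simplified sketch of EAR-style content-prevalence detection:
// hash fixed-length windows of each payload and alert when the same window is
// seen in traffic to many distinct destinations.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <unordered_set>

constexpr size_t kWindow = 16;        // bytes of payload per fingerprint (illustrative)
constexpr size_t kThreshold = 3;      // distinct destinations before alerting (illustrative)

// fingerprint -> set of destination addresses that received it
static std::unordered_map<size_t, std::unordered_set<uint32_t>> g_seen;

// Returns true if any window of this payload has now reached kThreshold
// distinct destinations, i.e. it looks like spreading worm content.
bool observe_packet(const std::string& payload, uint32_t dst_ip) {
    if (payload.size() < kWindow) return false;
    bool alert = false;
    for (size_t i = 0; i + kWindow <= payload.size(); ++i) {
        size_t fp = std::hash<std::string>{}(payload.substr(i, kWindow));
        auto& dests = g_seen[fp];
        dests.insert(dst_ip);
        if (dests.size() >= kThreshold) alert = true;
    }
    return alert;
}

int main() {
    std::string worm = "HEAD / HTTP/1.0 <exploit-bytes-here>";
    std::cout << observe_packet(worm, 0x0A000001) << "\n";  // 0: first sighting
    std::cout << observe_packet(worm, 0x0A000002) << "\n";  // 0: two destinations
    std::cout << observe_packet(worm, 0x0A000003) << "\n";  // 1: threshold reached
}
```

    A payload-agnostic heuristic such as STRIDE is needed precisely because a polymorphic worm can ensure no such common window exists across its instances.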